Information systems researchers, like those in many other social science disciplines, have debated the value and appropriateness of using students as research subjects. This debate appears in several published articles on the subject as well as in the review process. In this latter arena, however, the debate has become increasingly like a script: the actors (authors and reviewers) simply read their parts; some avoid the underlying issues, whereas others cursorily address generalizability without real consideration of those issues. As a result, despite the extent of the debate, we seem no closer to a resolution. Authors who use student subjects rely on their scripted arguments to justify that use and do not always consider whether those arguments are valid. But reviewers who oppose the use of student subjects are equally culpable. They, too, rely on scripted arguments to criticize work using student subjects, and do not always consider whether those arguments are salient to the particular study. By presenting and reviewing one version of this script in the context of theoretical discussions of generalizability, we hope to demonstrate its limitations so that we can move beyond these scripted arguments into a more meaningful discussion. To do this, we review empirical studies from the period 1990-2010 to examine the extent to which student subjects are being used in the field and to critically assess the discussions within the field about the use of student samples. We conclude by presenting recommendations for authors and reviewers: for determining whether the use of students is appropriate in a particular context, and for presenting and discussing work that uses student subjects.
A cognitive learning perspective is used to develop and test a model of the relationship between information acquisition and learning in the executive support systems (ESS) context. The model proposes two types of learning: mental model maintenance, in which new information fits into existing mental models and confirms them; and mental model building, in which mental models are changed to accommodate new information. It also proposes that information acquisition objectives determine the type of learning that is possible. When ESS are used to answer specific questions or solve well-defined problems, they help to fine-tune operations and verify assumptions; in other words, they help to maintain current mental models. However, ESS may be able to challenge fundamental assumptions and help to build new mental models if executives scan through them to help formulate problems and foster creativity. Thirty-six interviews with executive ESS users at seven organizations and a survey of 361 users at 18 additional organizations are used to develop scales to measure the model's constructs and provide support for its relationships. These results support the model's prediction that mental model building is more likely with scanning than with focused search. ESS also appear to contribute to mental model maintenance much more often than they do to mental model building. Without a clear focus on mental model building, it seems that business as usual is the more likely outcome.